The course seems interesting but quite hard. Let’s see. I’m hoping to learn how to use GitHub and R Markdown and to improve my R skills. I heard about the course through an email sent by my doctoral school.
https://github.com/SarKin-bit/IODS-project
# This is a so-called "R chunk" where you can write R code.
date()
## [1] "Wed Nov 18 15:44:56 2020"
The text continues here…
learning2014 <- read.delim("learning2014.txt")
dim(learning2014)
## [1] 166 7
str(learning2014)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : chr "F" "M" "F" "M" ...
## $ Age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ Attitude: int 37 31 25 35 37 38 35 29 38 21 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ Points : int 25 12 24 10 22 21 21 31 24 26 ...
There are 166 observations (rows) and 7 variables (columns): gender, Age, Attitude, deep, stra, surf and Points. All columns contain numerical data except “gender”, which is a character variable.
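As a quick look at the one categorical variable, its distribution can be tabulated without altering the data frame (a small addition, not part of the original output):
# Tabulate the character variable gender (does not modify the data).
table(learning2014$gender)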
Graphical overview of the data: Initialize plot with data and aesthetic mapping.
library(ggplot2)
p1 <- ggplot(learning2014, aes(x = Attitude, y = Points, col = gender))
Define the visualization type (points).
p2 <- p1 + geom_point()
Draw the plot.
p2
Add a regression line.
p3 <- p2 + geom_smooth(method = "lm")
p3
## `geom_smooth()` using formula 'y ~ x'
Summaries of the variables in the data.
summary(learning2014)
## gender Age Attitude deep
## Length:166 Min. :17.00 Min. :14.00 Min. :1.583
## Class :character 1st Qu.:21.00 1st Qu.:26.00 1st Qu.:3.333
## Mode :character Median :22.00 Median :32.00 Median :3.667
## Mean :25.51 Mean :31.43 Mean :3.680
## 3rd Qu.:27.00 3rd Qu.:37.00 3rd Qu.:4.083
## Max. :55.00 Max. :50.00 Max. :4.917
## stra surf Points
## Min. :1.250 Min. :1.583 Min. : 7.00
## 1st Qu.:2.625 1st Qu.:2.417 1st Qu.:19.00
## Median :3.188 Median :2.833 Median :23.00
## Mean :3.121 Mean :2.787 Mean :22.72
## 3rd Qu.:3.625 3rd Qu.:3.167 3rd Qu.:27.75
## Max. :5.000 Max. :4.333 Max. :33.00
There seems to be a positive correlation between attitude and exam points, but the observations are quite scattered and some lie far from the regression line, so the fit of the model is not perfect.
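To put a number on this visual impression, one can compute the Pearson correlation directly (a quick check, not part of the original output):
# Correlation between attitude and exam points, overall and by gender.
cor(learning2014$Attitude, learning2014$Points)
by(learning2014[, c("Attitude", "Points")], learning2014$gender,
   function(d) cor(d$Attitude, d$Points))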
Next, set three variables as explanatory variables and fit a regression model where exam points is the target (dependent) variable:
Create a plot matrix with ggpairs().
library(GGally)
## Registered S3 method overwritten by 'GGally':
## method from
## +.gg ggplot2
library(ggplot2)
ggpairs(learning2014, lower = list(combo = wrap("facethist", bins = 20)))
Create a regression model with multiple explanatory variables.
my_model2 <- lm(Points ~ Attitude + stra + deep, data = learning2014)
Print out a summary of the model.
summary(my_model2)
##
## Call:
## lm(formula = Points ~ Attitude + stra + deep, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.5239 -3.4276 0.5474 3.8220 11.5112
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.39145 3.40775 3.343 0.00103 **
## Attitude 0.35254 0.05683 6.203 4.44e-09 ***
## stra 0.96208 0.53668 1.793 0.07489 .
## deep -0.74920 0.75066 -0.998 0.31974
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.289 on 162 degrees of freedom
## Multiple R-squared: 0.2097, Adjusted R-squared: 0.195
## F-statistic: 14.33 on 3 and 162 DF, p-value: 2.521e-08
The relationship between the explanatory variable “Attitude” and the response variable “Points” is statistically significant (p<0.05). The relationships between the explanatory variables “stra” and “deep” and the response variable “Points” are not statistically significant (p>0.05).
Redo the model without the variables that were not statistically significant. Since two of the three explanatory variables had no significant effect, I fit a simple regression with Attitude only.
A scatter plot of points versus attitude.
qplot(Attitude, Points, data = learning2014) + geom_smooth(method = "lm")
## `geom_smooth()` using formula 'y ~ x'
Fit a linear model.
my_model3 <- lm(Points ~ Attitude, data = learning2014)
summary(my_model3)
##
## Call:
## lm(formula = Points ~ Attitude, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.9763 -3.2119 0.4339 4.1534 10.6645
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.63715 1.83035 6.358 1.95e-09 ***
## Attitude 0.35255 0.05674 6.214 4.12e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared: 0.1906, Adjusted R-squared: 0.1856
## F-statistic: 38.61 on 1 and 164 DF, p-value: 4.119e-09
The relationship between the target variable (Points) and the explanatory variable (Attitude) is statistically significant (p<0.05). Multiple R-squared is 0.1906, which is quite low (compared to 1): the data points are not very close to the fitted regression line. This indicates that the model explains only some of the variability of the response around its mean, so the model does not fit the data very well.
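For a simple regression this is easy to verify by hand, since multiple R-squared is just the squared correlation between the predictor and the response (a quick check, assuming the objects above):
# Squared Pearson correlation reproduces the multiple R-squared (~0.1906).
cor(learning2014$Attitude, learning2014$Points)^2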
Draw the diagnostic plots: Residuals vs Fitted values, Normal QQ-plot and Residuals vs Leverage.
par(mfrow = c(2,2))
plot(my_model2, which = c(1,2,5))
The assumptions of the model are those of linear regression: a linear relationship, normality of the errors, little or no multicollinearity, no autocorrelation, and homoscedasticity.
In the Residuals vs Fitted values plot the red line is not perfectly flat, which indicates a discernible non-linear trend in the residuals. The residuals also do not appear to be equally spread across the entire range of fitted values.
In the Normal QQ-plot we can see that the residuals are not perfectly normally distributed, as they deviate from the diagonal line at the tails.
In the Residuals vs Leverage plot there are no cases beyond the Cook’s distance lines, which means there are no influential cases, i.e. no influential outliers.
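As a numeric complement to the Residuals vs Leverage panel, one could list any observations whose Cook’s distance exceeds the common 4/n rule of thumb (an addition, not part of the original analysis):
# Cook's distances for the multiple regression model; flag values above 4/n.
cooksd <- cooks.distance(my_model2)
which(cooksd > 4 / nrow(learning2014))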
date()
## [1] "Wed Nov 18 15:45:50 2020"
Here we go again…
alc <- read.table(file = "alc.txt", sep = "\t", header = TRUE)
dim(alc)
## [1] 382 35
str(alc)
## 'data.frame': 382 obs. of 35 variables:
## $ school : chr "GP" "GP" "GP" "GP" ...
## $ sex : chr "F" "F" "F" "F" ...
## $ age : int 18 17 15 15 16 16 16 17 15 15 ...
## $ address : chr "U" "U" "U" "U" ...
## $ famsize : chr "GT3" "GT3" "LE3" "GT3" ...
## $ Pstatus : chr "A" "T" "T" "T" ...
## $ Medu : int 4 1 1 4 3 4 2 4 3 3 ...
## $ Fedu : int 4 1 1 2 3 3 2 4 2 4 ...
## $ Mjob : chr "at_home" "at_home" "at_home" "health" ...
## $ Fjob : chr "teacher" "other" "other" "services" ...
## $ reason : chr "course" "course" "other" "home" ...
## $ nursery : chr "yes" "no" "yes" "yes" ...
## $ internet : chr "no" "yes" "yes" "yes" ...
## $ guardian : chr "mother" "father" "mother" "mother" ...
## $ traveltime: int 2 1 1 1 1 1 1 2 1 1 ...
## $ studytime : int 2 2 2 3 2 2 2 2 2 2 ...
## $ failures : int 0 0 2 0 0 0 0 0 0 0 ...
## $ schoolsup : chr "yes" "no" "yes" "no" ...
## $ famsup : chr "no" "yes" "no" "yes" ...
## $ paid : chr "no" "no" "yes" "yes" ...
## $ activities: chr "no" "no" "no" "yes" ...
## $ higher : chr "yes" "yes" "yes" "yes" ...
## $ romantic : chr "no" "no" "no" "yes" ...
## $ famrel : int 4 5 4 3 4 5 4 4 4 5 ...
## $ freetime : int 3 3 3 2 3 4 4 1 2 5 ...
## $ goout : int 4 3 2 2 2 2 4 4 2 1 ...
## $ Dalc : int 1 1 2 1 1 1 1 1 1 1 ...
## $ Walc : int 1 1 3 1 2 2 1 1 1 1 ...
## $ health : int 3 3 3 5 5 5 3 1 1 5 ...
## $ absences : int 5 3 8 1 2 8 0 4 0 0 ...
## $ G1 : int 2 7 10 14 8 14 12 8 16 13 ...
## $ G2 : int 8 8 10 14 12 14 12 9 17 14 ...
## $ G3 : int 8 8 11 14 12 14 12 10 18 14 ...
## $ alc_use : num 1 1 2.5 1 1.5 1.5 1 1 1 1 ...
## $ high_use : logi FALSE FALSE TRUE FALSE FALSE FALSE ...
Access the tidyverse libraries tidyr, dplyr, ggplot2.
library(tidyr); library(dplyr); library(ggplot2)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
Glimpse at the alc data.
glimpse(alc)
## Rows: 382
## Columns: 35
## $ school <chr> "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP"…
## $ sex <chr> "F", "F", "F", "F", "F", "M", "M", "F", "M", "M", "F", "F"…
## $ age <int> 18, 17, 15, 15, 16, 16, 16, 17, 15, 15, 15, 15, 15, 15, 15…
## $ address <chr> "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "U"…
## $ famsize <chr> "GT3", "GT3", "LE3", "GT3", "GT3", "LE3", "LE3", "GT3", "L…
## $ Pstatus <chr> "A", "T", "T", "T", "T", "T", "T", "A", "A", "T", "T", "T"…
## $ Medu <int> 4, 1, 1, 4, 3, 4, 2, 4, 3, 3, 4, 2, 4, 4, 2, 4, 4, 3, 3, 4…
## $ Fedu <int> 4, 1, 1, 2, 3, 3, 2, 4, 2, 4, 4, 1, 4, 3, 2, 4, 4, 3, 2, 3…
## $ Mjob <chr> "at_home", "at_home", "at_home", "health", "other", "servi…
## $ Fjob <chr> "teacher", "other", "other", "services", "other", "other",…
## $ reason <chr> "course", "course", "other", "home", "home", "reputation",…
## $ nursery <chr> "yes", "no", "yes", "yes", "yes", "yes", "yes", "yes", "ye…
## $ internet <chr> "no", "yes", "yes", "yes", "no", "yes", "yes", "no", "yes"…
## $ guardian <chr> "mother", "father", "mother", "mother", "father", "mother"…
## $ traveltime <int> 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 3, 1, 2, 1, 1, 1, 3, 1, 1…
## $ studytime <int> 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 3, 1, 2, 3, 1, 3, 2, 1, 1…
## $ failures <int> 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0…
## $ schoolsup <chr> "yes", "no", "yes", "no", "no", "no", "no", "yes", "no", "…
## $ famsup <chr> "no", "yes", "no", "yes", "yes", "yes", "no", "yes", "yes"…
## $ paid <chr> "no", "no", "yes", "yes", "yes", "yes", "no", "no", "yes",…
## $ activities <chr> "no", "no", "no", "yes", "no", "yes", "no", "no", "no", "y…
## $ higher <chr> "yes", "yes", "yes", "yes", "yes", "yes", "yes", "yes", "y…
## $ romantic <chr> "no", "no", "no", "yes", "no", "no", "no", "no", "no", "no…
## $ famrel <int> 4, 5, 4, 3, 4, 5, 4, 4, 4, 5, 3, 5, 4, 5, 4, 4, 3, 5, 5, 3…
## $ freetime <int> 3, 3, 3, 2, 3, 4, 4, 1, 2, 5, 3, 2, 3, 4, 5, 4, 2, 3, 5, 1…
## $ goout <int> 4, 3, 2, 2, 2, 2, 4, 4, 2, 1, 3, 2, 3, 3, 2, 4, 3, 2, 5, 3…
## $ Dalc <int> 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1…
## $ Walc <int> 1, 1, 3, 1, 2, 2, 1, 1, 1, 1, 2, 1, 3, 2, 1, 2, 2, 1, 4, 3…
## $ health <int> 3, 3, 3, 5, 5, 5, 3, 1, 1, 5, 2, 4, 5, 3, 3, 2, 2, 4, 5, 5…
## $ absences <int> 5, 3, 8, 1, 2, 8, 0, 4, 0, 0, 1, 2, 1, 1, 0, 5, 8, 3, 9, 5…
## $ G1 <int> 2, 7, 10, 14, 8, 14, 12, 8, 16, 13, 12, 10, 13, 11, 14, 16…
## $ G2 <int> 8, 8, 10, 14, 12, 14, 12, 9, 17, 14, 11, 12, 14, 11, 15, 1…
## $ G3 <int> 8, 8, 11, 14, 12, 14, 12, 10, 18, 14, 12, 12, 13, 12, 16, …
## $ alc_use <dbl> 1.0, 1.0, 2.5, 1.0, 1.5, 1.5, 1.0, 1.0, 1.0, 1.0, 1.5, 1.0…
## $ high_use <lgl> FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE, FAL…
Use gather() to gather columns into key-value pairs and then glimpse() at the resulting data.
gather(alc) %>% glimpse
## Rows: 13,370
## Columns: 2
## $ key <chr> "school", "school", "school", "school", "school", "school", "sc…
## $ value <chr> "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP…
Draw a bar plot of each variable.
gather(alc) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()
There are 382 observations (rows) and 35 variables (columns). Variables in the data include, for example, school, sex, age, address, family size, family educational support, free time after school and internet access at home. For a more detailed explanation of the variables, please visit: https://archive.ics.uci.edu/ml/datasets/Student+Performance
Chosen interesting variables: the student’s grade (G3), absences, health, and going out with friends (goout).
Hypothesis 1: There is a relationship between alcohol use and the student’s grade.
Hypothesis 2: There is a relationship between alcohol use and the student’s absences.
Hypothesis 3: There is a relationship between alcohol use and the student’s health.
Hypothesis 4: There is a relationship between alcohol use and going out with friends.
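As a quick numeric first look at the four hypotheses before plotting, one could correlate alc_use with the chosen variables (a sketch, not part of the original analysis):
# Correlations of alcohol use with grade, absences, health and going out.
sapply(alc[, c("G3", "absences", "health", "goout")],
       function(v) cor(alc$alc_use, v))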
Hypothesis 1: There is a relationship between alcohol use and the student’s grade.
attach(alc)
table(G3,high_use,sex)
## , , sex = F
##
## high_use
## G3 FALSE TRUE
## 0 0 0
## 2 0 0
## 3 0 0
## 4 3 0
## 5 3 0
## 6 11 2
## 7 3 1
## 8 14 3
## 9 2 1
## 10 25 6
## 11 10 6
## 12 33 10
## 13 8 3
## 14 15 3
## 15 10 1
## 16 13 4
## 17 2 2
## 18 4 0
##
## , , sex = M
##
## high_use
## G3 FALSE TRUE
## 0 1 1
## 2 0 1
## 3 1 0
## 4 3 1
## 5 1 2
## 6 4 5
## 7 1 1
## 8 5 5
## 9 1 3
## 10 13 20
## 11 6 4
## 12 21 17
## 13 11 4
## 14 16 4
## 15 5 0
## 16 17 4
## 17 3 0
## 18 3 0
library(ggplot2)
Initialise a plot of high_use and G3.
g1 <- ggplot(alc, aes(x = high_use, y = G3, col = sex))
Define the plot as a boxplot and draw it.
g1 + geom_boxplot() + ylab("grade")
Initialise a plot of high use and sex.
g2 <- ggplot(data = alc, aes(x = high_use, fill = sex))
Define the plot as a bar plot and draw it.
g2 + geom_bar() + facet_wrap("sex")
Male students with high alcohol use have lower grades than male students who do not use a lot of alcohol. For female students, high alcohol use does not seem to affect the grade.
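The boxplot reading can be backed up with group means (a small sketch using dplyr, which is loaded above):
# Mean final grade by sex and alcohol-use group.
alc %>% group_by(sex, high_use) %>% summarise(mean_grade = mean(G3))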
Hypothesis 2: There is a relationship between alcohol use and the student’s absences.
table(G3,absences,sex)
## , , sex = F
##
## absences
## G3 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 16 17 18 19 20 21 26 27 29 44
## 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 4 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 5 0 0 2 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 6 5 3 0 1 1 2 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 7 0 0 0 1 0 0 0 0 0 2 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
## 8 1 2 4 1 0 2 2 1 0 0 1 0 0 1 0 0 0 0 0 0 2 0 0 0 0
## 9 0 0 2 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 10 5 4 4 1 4 1 1 1 1 1 2 0 0 0 2 1 0 0 0 2 0 0 0 0 0
## 11 2 2 1 2 1 3 0 0 2 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 1
## 12 5 5 14 5 2 1 2 1 5 2 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
## 13 0 2 2 2 1 0 1 2 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 14 2 4 0 3 3 2 0 1 1 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0
## 15 3 0 4 1 1 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
## 16 5 1 3 0 1 2 3 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
## 17 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
## 18 2 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## absences
## G3 45
## 0 0
## 2 0
## 3 0
## 4 0
## 5 0
## 6 0
## 7 0
## 8 0
## 9 0
## 10 1
## 11 0
## 12 0
## 13 0
## 14 0
## 15 0
## 16 0
## 17 0
## 18 0
##
## , , sex = M
##
## absences
## G3 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 16 17 18 19 20 21 26 27 29 44
## 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 2 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 3 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
## 4 0 0 1 0 1 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 5 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 6 1 1 1 3 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0
## 7 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 8 1 1 0 0 4 0 0 0 1 0 1 0 2 0 0 0 0 0 0 0 0 0 0 0 0
## 9 0 1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0
## 10 4 4 5 2 3 4 2 1 2 2 0 0 1 0 2 0 1 0 0 0 0 0 0 0 0
## 11 1 2 0 1 1 2 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0
## 12 8 8 4 3 2 1 1 2 2 3 1 1 0 0 1 0 0 1 0 0 0 0 0 0 0
## 13 2 2 2 5 3 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
## 14 4 2 3 4 2 0 3 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
## 15 0 1 0 1 2 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 16 5 4 3 1 2 1 2 0 2 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 17 1 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## 18 1 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## absences
## G3 45
## 0 0
## 2 0
## 3 0
## 4 0
## 5 0
## 6 0
## 7 0
## 8 0
## 9 0
## 10 0
## 11 0
## 12 0
## 13 0
## 14 0
## 15 0
## 16 0
## 17 0
## 18 0
library(ggplot2)
Initialise a plot of high_use and absences.
g3 <- ggplot(alc, aes(x = high_use, y = absences, col = sex))
Define the plot as a boxplot and draw it.
g3 + geom_boxplot() + ggtitle("Student absences by alcohol consumption")
For male students, high users of alcohol have more absences from school. For female students, the number of absences is quite similar whether or not they use a high amount of alcohol.
Hypothesis 3: There is a relationship between alcohol use and the student’s health. Initialise a plot of high_use and health.
g4 <- ggplot(alc, aes(x = high_use, y = health, col = sex))
Define the plot as a boxplot and draw it.
g4 + geom_boxplot() + ggtitle("Student health by alcohol consumption")
For male students, the health score was similar whether or not they were high users of alcohol. For female students, surprisingly, the health score was higher for those who were high users of alcohol. Among high alcohol users there was much more variation, though.
Hypothesis 4: There is a relationship between alcohol use and going out with friends. Initialise a plot of high_use and goout.
g4 <- ggplot(alc, aes(x = high_use, y = goout, col = sex))
Define the plot as a boxplot and draw it.
g4 + geom_boxplot() + ggtitle("Going out with friends by alcohol consumption")
Both male and female high alcohol users go out with friends more than low alcohol users.
Fit the model with glm().
m <- glm(high_use ~ absences + G3 + health + goout, data = alc, family = "binomial")
Print out a summary of the model.
summary(m)
##
## Call:
## glm(formula = high_use ~ absences + G3 + health + goout, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.8599 -0.7560 -0.5555 0.9414 2.3160
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.60300 0.77329 -4.659 3.17e-06 ***
## absences 0.07508 0.02197 3.417 0.000633 ***
## G3 -0.04081 0.03843 -1.062 0.288295
## health 0.12650 0.09087 1.392 0.163897
## goout 0.72684 0.11849 6.134 8.57e-10 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 398.95 on 377 degrees of freedom
## AIC: 408.95
##
## Number of Fisher Scoring iterations: 4
Print out the coefficients of the model.
coef(m)
## (Intercept) absences G3 health goout
## -3.60300340 0.07507726 -0.04080848 0.12650463 0.72684009
Absences and going out with friends are highly significant predictors of the probability of being a high user of alcohol. Health and grade are not significant predictors.
Present and interpret the coefficients of the model as odds ratios and provide confidence intervals for them. Fit the model with glm().
m <- glm(high_use ~ absences + G3 + health + goout, data = alc, family = "binomial")
Compute odds ratios (OR).
OR <- coef(m) %>% exp
Compute confidence intervals (CI).
CI <- confint(m) %>% exp
## Waiting for profiling to be done...
Print out the odds ratios with their confidence intervals.
cbind(OR, CI)
## OR 2.5 % 97.5 %
## (Intercept) 0.02724178 0.005694834 0.118851
## absences 1.07796743 1.034131788 1.128425
## G3 0.96001298 0.890279078 1.035492
## health 1.13485470 0.951360927 1.359672
## goout 2.06853388 1.649082084 2.626747
Absences, health, and going out with friends are all positively associated with high alcohol use, whereas the grade is negatively associated with it. Note that odds ratios describe multiplicative changes in the odds: each additional absence increases the odds of being a high user of alcohol by about 8%, and each one-point increase in the health score increases the odds by about 13.5%. Each one-point increase in the going-out score roughly doubles the odds (OR ≈ 2.07), and each one-point increase in the grade decreases the odds by about 4%.
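These odds ratios are simply the exponentiated model coefficients, so the interpretation can be checked by hand (a quick verification):
# exp(0.72684) ~ 2.07, matching the goout row of the OR table above.
exp(coef(m)["goout"])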
Exploring the predictive power of the model.
Fit the model using only the variables that had a statistical relationship with high/low alcohol consumption.
m <- glm(high_use ~ absences + goout, data = alc, family = "binomial")
Predict() the probability of high_use.
probabilities <- predict(m, type = "response")
Add the predicted probabilities to ‘alc’.
alc <- mutate(alc, probability = probabilities)
Use the probabilities to make a prediction of high_use.
alc <- mutate(alc, prediction = probability > 0.5)
See the last ten original classes, predicted probabilities, and class predictions.
select(alc, failures, absences, sex, high_use, probability, prediction) %>% tail(10)
## failures absences sex high_use probability prediction
## 373 1 0 M FALSE 0.10204808 FALSE
## 374 1 7 M TRUE 0.28913349 FALSE
## 375 0 1 F FALSE 0.20395640 FALSE
## 376 0 6 F FALSE 0.27356268 FALSE
## 377 1 2 F FALSE 0.11705469 FALSE
## 378 0 2 F FALSE 0.36613754 FALSE
## 379 2 2 F FALSE 0.11705469 FALSE
## 380 0 3 F FALSE 0.06419413 FALSE
## 381 0 4 M TRUE 0.58446429 TRUE
## 382 0 2 M TRUE 0.05971939 FALSE
Tabulate the target variable versus the predictions.
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 245 23
## TRUE 67 47
The prediction was “high alcohol use” 23 times when the student was not actually a high user, and “not high alcohol use” 67 times when the student actually was a high user.
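The same table can be summarised into standard classification metrics (an addition, not part of the original analysis):
# Accuracy, sensitivity (true positive rate) and specificity (true
# negative rate) from the confusion matrix; accuracy is 292/382 ~ 0.76.
tab <- table(high_use = alc$high_use, prediction = alc$prediction)
c(accuracy = sum(diag(tab)) / sum(tab),
  sensitivity = tab["TRUE", "TRUE"] / sum(tab["TRUE", ]),
  specificity = tab["FALSE", "FALSE"] / sum(tab["FALSE", ]))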
Access dplyr and ggplot2.
library(dplyr); library(ggplot2)
A graphic visualizing both the actual values and the predictions.
Initialize a plot of ‘high_use’ versus ‘probability’ in ‘alc’.
g <- ggplot(alc, aes(x = probability, y = high_use, col = prediction))
Define the geom as points and draw the plot.
g + geom_point()
Tabulate the target variable versus the predictions.
table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table %>% addmargins
## prediction
## high_use FALSE TRUE Sum
## FALSE 0.64136126 0.06020942 0.70157068
## TRUE 0.17539267 0.12303665 0.29842932
## Sum 0.81675393 0.18324607 1.00000000
According to the predictions, about 82% of the students are not high alcohol users, whereas according to the actual values about 70% are not. There is quite a big difference between the predictions and the actual values. Next, compute the total proportion of inaccurately classified individuals (the training error).
Define a loss function (average prediction error).
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
Call loss_func to compute the proportion of wrong predictions in the (training) data.
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2356021
The training error is about 24%, i.e. the model classifies about 76% of the training observations correctly.
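Since the training error is computed on the same data the model was fitted to, it is optimistic; 10-fold cross-validation with the same loss function would give a more honest error estimate (a sketch using boot::cv.glm, not run in the original):
# Average prediction error on held-out folds; typically somewhat
# higher than the training error of ~0.24.
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv$delta[1]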
Exercise 4: Clustering and classification.
#Access the MASS package.
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
#Load the data.
data("Boston")
#Explore the dataset.
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
#Plot matrix of the variables.
pairs(Boston)
There are 506 observations (rows) and 14 variables (columns) in the dataset. For more details of the dataset, please visit https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/Boston.html
library(dplyr)
library(corrplot)
## corrplot 0.84 loaded
#Calculate the correlation matrix and round it.
cor_matrix<-cor(Boston) %>% round(digits = 2)
#Print the correlation matrix.
cor_matrix
## crim zn indus chas nox rm age dis rad tax ptratio
## crim 1.00 -0.20 0.41 -0.06 0.42 -0.22 0.35 -0.38 0.63 0.58 0.29
## zn -0.20 1.00 -0.53 -0.04 -0.52 0.31 -0.57 0.66 -0.31 -0.31 -0.39
## indus 0.41 -0.53 1.00 0.06 0.76 -0.39 0.64 -0.71 0.60 0.72 0.38
## chas -0.06 -0.04 0.06 1.00 0.09 0.09 0.09 -0.10 -0.01 -0.04 -0.12
## nox 0.42 -0.52 0.76 0.09 1.00 -0.30 0.73 -0.77 0.61 0.67 0.19
## rm -0.22 0.31 -0.39 0.09 -0.30 1.00 -0.24 0.21 -0.21 -0.29 -0.36
## age 0.35 -0.57 0.64 0.09 0.73 -0.24 1.00 -0.75 0.46 0.51 0.26
## dis -0.38 0.66 -0.71 -0.10 -0.77 0.21 -0.75 1.00 -0.49 -0.53 -0.23
## rad 0.63 -0.31 0.60 -0.01 0.61 -0.21 0.46 -0.49 1.00 0.91 0.46
## tax 0.58 -0.31 0.72 -0.04 0.67 -0.29 0.51 -0.53 0.91 1.00 0.46
## ptratio 0.29 -0.39 0.38 -0.12 0.19 -0.36 0.26 -0.23 0.46 0.46 1.00
## black -0.39 0.18 -0.36 0.05 -0.38 0.13 -0.27 0.29 -0.44 -0.44 -0.18
## lstat 0.46 -0.41 0.60 -0.05 0.59 -0.61 0.60 -0.50 0.49 0.54 0.37
## medv -0.39 0.36 -0.48 0.18 -0.43 0.70 -0.38 0.25 -0.38 -0.47 -0.51
## black lstat medv
## crim -0.39 0.46 -0.39
## zn 0.18 -0.41 0.36
## indus -0.36 0.60 -0.48
## chas 0.05 -0.05 0.18
## nox -0.38 0.59 -0.43
## rm 0.13 -0.61 0.70
## age -0.27 0.60 -0.38
## dis 0.29 -0.50 0.25
## rad -0.44 0.49 -0.38
## tax -0.44 0.54 -0.47
## ptratio -0.18 0.37 -0.51
## black 1.00 -0.37 0.33
## lstat -0.37 1.00 -0.74
## medv 0.33 -0.74 1.00
#Visualize the correlation matrix.
corrplot(cor_matrix, method="circle", type="upper", cl.pos="b", tl.pos="d", tl.cex = 0.6)
Positive correlations are displayed in blue and negative correlations in red; colour intensity and circle size are proportional to the correlation coefficient. There are high negative correlations between dis (weighted mean of distances to five Boston employment centres) and each of indus (proportion of non-retail business acres per town), nox (nitrogen oxides concentration, parts per 10 million) and age (proportion of owner-occupied units built prior to 1940), as well as between lstat (percentage of lower-status population) and medv (median value of owner-occupied homes in $1000s). There is a high positive correlation between rad (index of accessibility to radial highways) and tax (full-value property-tax rate per $10,000).
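Instead of reading the strongest pairs off the plot, they can also be extracted programmatically (a sketch; the 0.7 cutoff is an arbitrary choice):
# List variable pairs whose absolute correlation exceeds 0.7.
strong <- which(abs(cor_matrix) > 0.7 & abs(cor_matrix) < 1, arr.ind = TRUE)
strong <- strong[strong[, 1] < strong[, 2], , drop = FALSE]
data.frame(var1 = rownames(cor_matrix)[strong[, 1]],
           var2 = colnames(cor_matrix)[strong[, 2]],
           cor = cor_matrix[strong])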
Standardize the dataset, create a categorical variable of the crime rate, and divide the dataset into train and test sets.
#Center and standardize variables.
boston_scaled <- scale(Boston)
#Summaries of the scaled variables.
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
#Class of the boston_scaled object.
class(boston_scaled)
## [1] "matrix" "array"
#Change the object to data frame.
boston_scaled <- as.data.frame(boston_scaled)
After scaling, every variable has mean 0 (and standard deviation 1). Note that scaling changes the location and spread of the variables but not the shape of their distributions, so it does not make them normally distributed.
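scale() simply subtracts the column mean and divides by the column standard deviation, which can be verified for a single variable (a quick check):
# Manual standardization of crim reproduces the scaled column.
all.equal(boston_scaled$crim,
          (Boston$crim - mean(Boston$crim)) / sd(Boston$crim))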
#Summary of the scaled crime rate.
summary(boston_scaled$crim)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -0.419367 -0.410563 -0.390280 0.000000 0.007389 9.924110
#Create a quantile vector of crim and print it.
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
#Create a categorical variable 'crime'.
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
#Look at the table of the new factor crime.
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
#Remove original crim from the dataset.
boston_scaled <- dplyr::select(boston_scaled, -crim)
#Add the new categorical value to scaled data.
boston_scaled <- data.frame(boston_scaled, crime)
Divide the dataset into train and test sets, so that 80% of the data belongs to the train set.
#Number of rows in the Boston dataset.
n <- nrow(boston_scaled)
#Choose randomly 80% of the rows.
ind <- sample(n, size = n * 0.8)
#Create train set.
train <- boston_scaled[ind,]
#Create test set.
test <- boston_scaled[-ind,]
#Save the correct classes from test data.
correct_classes <- test$crime
#Remove the crime variable from test data.
test <- dplyr::select(test, -crime)
Fit the linear discriminant analysis on the train set.
#Linear discriminant analysis.
lda.fit <- lda(crime ~ ., data = train)
#Print the lda.fit object.
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2500000 0.2574257 0.2351485 0.2574257
##
## Group means:
## zn indus chas nox rm age
## low 0.95614857 -0.9042266 -0.11640431 -0.8841092 0.4816917 -0.8510500
## med_low -0.05743812 -0.2884397 -0.04518867 -0.5954046 -0.1025284 -0.3562754
## med_high -0.37801632 0.1473854 0.10065938 0.3244773 0.1177812 0.3721883
## high -0.48724019 1.0170690 -0.08304540 1.0333936 -0.4993279 0.8033220
## dis rad tax ptratio black lstat
## low 0.8839980 -0.6896375 -0.7402637 -0.45828740 0.38312402 -0.794373104
## med_low 0.4311544 -0.5379445 -0.4687774 -0.06073831 0.32450649 -0.162546639
## med_high -0.3771852 -0.3991753 -0.2988896 -0.26486964 0.08482024 -0.001018583
## high -0.8517771 1.6386213 1.5144083 0.78135074 -0.81076358 0.926105882
## medv
## low 0.535668572
## med_low 0.004378617
## med_high 0.222934475
## high -0.671942130
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.0775637149 0.69848452 -0.9294836267
## indus -0.0024650632 -0.26276960 0.4212837415
## chas -0.0362772346 0.02583705 0.0724948140
## nox 0.5126183917 -0.80012447 -1.3268646945
## rm -0.0007980922 0.01710810 -0.1583536099
## age 0.1927481305 -0.19595178 -0.0551474951
## dis -0.0914894714 -0.19402508 0.3505698401
## rad 3.0998597071 0.92810222 0.0005510124
## tax 0.0558564532 0.06509186 0.4314811981
## ptratio 0.1262937979 0.01227679 -0.2366434106
## black -0.1160698743 0.02604043 0.1059276691
## lstat 0.2047897209 -0.28478786 0.3019380850
## medv 0.1257139226 -0.52123725 -0.2074541909
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9544 0.0333 0.0123
#The function for lda biplot arrows.
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
#Target classes as numeric.
classes <- as.numeric(train$crime)
#Plot the lda results.
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)
Predicting with the model.
#Predict classes with test data.
lda.pred <- predict(lda.fit, newdata = test)
#Cross tabulate the results.
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 15 10 1 0
## med_low 3 13 6 0
## med_high 0 13 17 1
## high 0 0 0 23
Note that the numbers change every time you run the model, because the train/test split is random. In the run shown above, the model predicted the high crime rates well (23/23 were classified correctly). The other categories were predicted less well: 17/31 medium-high, 13/22 medium-low and 15/26 low crime rates were classified correctly.
total <- 15+10+1+3+13+6+13+17+1+23
total
## [1] 102
correct <- 15+13+17+23
correct
## [1] 68
Out of the 102 test-set observations, 68 were classified correctly.
ratio <- correct/total
ratio
## [1] 0.6666667
The accuracy of the model on the test set was about 67%, which could be better.
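Summing the cells by hand is error-prone; the accuracy can also be computed directly from the cross-tabulation object (a sketch, assuming the objects above):
# Proportion of correctly classified observations (diagonal of the table).
tab <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(tab)) / sum(tab)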
Reload the Boston dataset.
library(MASS)
data("Boston")
#Center and standardize variables.
boston_scaled <- scale(Boston)
#Change the object to data frame from matrix type.
boston_scaled <- as.data.frame(boston_scaled)
#Calculate the Euclidean distances between observations.
dist_eu <- dist(boston_scaled)
#Look at the summary of the distances.
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
Run the k-means algorithm on the dataset.
#K-means clustering.
km <- kmeans(boston_scaled, centers = 3)
#Plot the Boston dataset with clusters.
pairs(boston_scaled, col = km$cluster)
#Investigate the optimal number of clusters and run the algorithm again.
set.seed(123)
#Set the maximum number of clusters to try.
k_max <- 10
#Calculate the total within sum of squares.
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
#Visualize with qplot the total WCSS when the number of clusters goes from 1 to 10.
library(ggplot2)
qplot(x = 1:k_max, y = twcss, geom = 'line')
Two clusters seem optimal, as the bend (elbow) of the curve is at two.
#Run kmeans() again with two clusters.
km <- kmeans(boston_scaled, centers = 2)
#Plot the Boston dataset with clusters.
pairs(boston_scaled, col = km$cluster)
As observed before, the optimal number of clusters seems to be two.
Super bonus.
model_predictors <- dplyr::select(train, -crime)
#Check the dimensions.
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
#Matrix multiplication.
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
#Access the plotly library.
library(plotly)
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
#Create a 3D plot of the columns of the matrix product by typing the code below.
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers')
## Warning: `arrange_()` is deprecated as of dplyr 0.7.0.
## Please use `arrange()` instead.
## See vignette('programming') for more help
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.
#Add color as an argument in the plot_ly() function.
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = train$crime)
Draw another 3D plot where the color is defined by the clusters of the k-means.
#Make a k-means with 4 clusters to compare the methods.
km3D <- kmeans(boston_scaled, centers = 4)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = km3D$cluster[ind])
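To compare the two colourings more directly, one could also cross-tabulate the crime classes of the training observations against their k-means clusters (a small addition, not part of the original analysis):
# LDA target classes versus k-means clusters for the training rows.
table(crime = train$crime, cluster = km3D$cluster[ind])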